wip: group RUN's, conditionalize starting of services #18
ibotty wants to merge 11 commits into gluster:master
Conversation
"wip" is not the right label. It's more a "RFC". If you want to, I can localize the commits into smaller parts. I can also bring the fedora-based image up to date and let it conform to the centos-one, if so desired. |
@ibotty I apologize for the delay in getting back to you. It would be really appreciated if you could split this into separate PRs, so that it is easier to bring in the changes. Thanks!!!
NP. I'll rework it in the following weeks. Btw: is there a test suite available somewhere?
@ibotty we have some internal tests, but nothing upstream. I will look into this. It would also be appreciated if you could break up the commits and send separate PRs.
MohamedAshiqrh left a comment
Nice work, @ibotty :-).
var=$((var+1))
echo "$dir is not empty"
else
if ! cp -r ${dir}_bkp/* $dir
I feel we should copy only if all the directories in the list are empty. Say a volume such as -v /var/lib/glusterd:/var/lib/glusterd:z is missing: in that case this will copy the default options and start the container, which IMO is not persisting the state of the container. I feel it should fail in these cases and notify the user that it failed because of this directory.
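A minimal sketch of what such a check could look like, assuming a directory list and the ${dir}_bkp naming seen in the snippet above (the list itself is an assumption for illustration, not taken from the full script):

    # Sketch: restore defaults only when every state directory is empty;
    # if some are populated and some are not, fail so the user notices a
    # missing volume such as -v /var/lib/glusterd:/var/lib/glusterd:z.
    dirs="/var/lib/glusterd /etc/glusterfs"   # assumed list for illustration
    empty=0 nonempty=0
    for dir in $dirs; do
        if [ -z "$(ls -A "$dir" 2>/dev/null)" ]; then
            empty=$((empty+1))
        else
            nonempty=$((nonempty+1))
        fi
    done
    if [ "$empty" -gt 0 ] && [ "$nonempty" -gt 0 ]; then
        echo "refusing to restore defaults: some state directories are already populated" >&2
        exit 1
    elif [ "$nonempty" -eq 0 ]; then
        for dir in $dirs; do
            cp -r "${dir}_bkp/"* "$dir" || exit 1
        done
    fi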
enable_start_unit_if_env() {
    local unit="$1"
    local env_var="$1"
Shouldn't this be $2? Let me know if I am wrong.
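It does look like a typo. A corrected sketch under that assumption (the body below is a guess at the intended behaviour, not the actual code from this PR):

    enable_start_unit_if_env() {
        local unit="$1"
        local env_var="$2"   # was "$1" in the diff
        # Enable and start the unit only when the named environment variable is set.
        if [ -n "${!env_var}" ]; then
            systemctl enable "$unit"
            systemctl start "$unit"
        fi
    }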
RUN systemctl enable gluster-setup.service

EXPOSE 2222 111 245 443 24007 2049 8080 6010 6011 6012 38465 38466 38468 38469 49152 49153 49154 49156 49157 49158 49159 49160 49161 49162
RUN yum --setopt=tsflags=nodocs -y install centos-release-gluster epel-release && \
Can you add yum --setopt=tsflags=nodocs -y update? @humblec what do you think, will we need this?
AFAICT there is no consensus regarding yum update within Dockerfiles (see e.g. this discussion). I tend not to include it if the base image gets updated regularly; I rely on my build triggers in that case. Note that if the image is not updated, you still have to make sure to rebuild the image, and how would you know you have to do that :/.
The advantage of not using yum update is reproducibility.
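To make the trade-off concrete, here are the two variants being discussed, sketched with the package set from the diff above:

    # Variant A (as in this PR): no "yum update"; builds are reproducible for a
    # given base image, but depend on rebuild triggers to pick up fixes.
    RUN yum --setopt=tsflags=nodocs -y install centos-release-gluster epel-release && \
        yum clean all

    # Variant B (as suggested): update first; always current, but two builds of
    # the same Dockerfile can produce different images.
    RUN yum --setopt=tsflags=nodocs -y update && \
        yum --setopt=tsflags=nodocs -y install centos-release-gluster epel-release && \
        yum clean all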
Agreed. So we will depend on the image rebuilds and trigger accordingly. BTW, the "this discussion" link points to a 404 page.
I never get the GitHub issue links right. Corrected above.
    openssh-server openssh-clients ntp rsync tar cronie sudo xfsprogs \
    glusterfs glusterfs-server glusterfs-geo-replication && \
    yum clean all && \
    (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done) && \
I am not sure, but doesn't this wipe gluster's service files too, since it's done after the packages are installed?
Well, yes. It wipes the annotation (the symlink) that marks them to be started automatically, if they start automatically at all. The unit files themselves don't reside there.
Since we copy our service file here, I thought everyone would be doing the same.
But that one does not get deleted; only the unit links in xxx.wants get deleted. Maybe I am missing something.
Oh yeah, my bad, I did not notice the "xxx.wants" there. Thanks.
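For anyone else following along, a quick way to see the distinction inside the container (paths follow the standard systemd layout; glusterd.service is just an example, not taken from this Dockerfile):

    # The packaged unit file is untouched by the cleanup above:
    ls /usr/lib/systemd/system/glusterd.service
    # The enable-links in the *.wants directories are what got removed:
    ls /etc/systemd/system/multi-user.target.wants/
    # "systemctl enable" only recreates such a symlink; it never moves the unit file:
    systemctl enable glusterd.service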
    yum clean all && \
    (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done) && \
    rm -f /lib/systemd/system/multi-user.target.wants/* && \
    rm -f /etc/systemd/system/*.wants/* && \
I like the way && is used instead of ;.
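The difference matters in a Dockerfile: with ; a failing step is masked by the exit code of the next command, while with && the whole RUN layer (and the build) fails. A small illustration (the package name is deliberately bogus):

    # With ';' this layer "succeeds" even though the install failed:
    RUN yum -y install no-such-package ; yum clean all
    # With '&&' the build stops at the first failing command:
    RUN yum -y install no-such-package && yum clean all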
Sorry for not getting back. I will incorporate the comments, split it up and give it a test ride, hopefully next week.
@ibotty sure. Appreciate your contribution.
@ibotty looks like you are busy :). Please let us know if you would like us to move forward with the next iteration of this patch.
Oh, sorry. Yes, I was and am busy. I ran into so many problems with LVM getting confused between the host and the Docker container that I installed on the host directly, so it will be hard for me to dedicate time to this. Sorry for not communicating!
@ibotty @MohamedAshiqrh we use LVM in our setup without issues between host and container. Not sure what issues you are facing in your setup. Anyway, we will revisit the PR. Thanks!
I don't have the details handy, but LVM's config files in the container and on the host were not consistent, so I got the warning about having to repair a thin pool. In the end, instead of debugging that further, I installed on the host. I would still be willing to rebase this PR, but I won't be able to test the changes. If you are interested, I will do that soon-ish.